Lip reading

Lip reading, also known as lipreading or speechreading, is a technique of understanding speech by visually interpreting the movements of the lips, face and tongue when normal sound is not available, relying also on information provided by the context, knowledge of the language, and any residual hearing. Although primarily associated with deaf and hard-of-hearing people, most people with normal hearing also process visual information from the moving mouth, whether or not they are aware of it.
== Process ==
Although speech perception is considered to be an auditory skill, it is intrinsically multimodal, since producing speech requires the speaker to make movements of the lips, teeth and tongue that are often visible in face-to-face communication. Information from the lips and face therefore supports aural comprehension, and most fluent listeners of a language are sensitive to seen speech actions (see McGurk effect). The extent to which people make use of seen speech actions varies with the visibility of the speech action and with the knowledge and skill of the perceiver.
Lipreading while listening to spoken language provides the redundant audiovisual cues needed to learn language in the first place: Lewkowicz found in his studies that infants between 4 and 8 months of age pay special attention to mouth movements when learning to speak, for both native and nonnative languages. After 12 months of age, infants have acquired enough audiovisual cues that they no longer need to look at the mouth when encountering their native language; hearing a nonnative language, however, again prompts a shift to combined visual and auditory engagement, lipreading while listening, in order to process, understand and produce speech.
Research has shown that, as expected, deaf adults are better lipreaders than hearing adults, owing to their greater practice and heavier reliance on lipreading to understand speech. However, when the same research team conducted a similar study with children, it found that deaf and hearing children have similar lipreading skills; the two groups only begin to diverge significantly after 14 years of age, indicating that lipreading skill in early life is independent of auditory capability. This may indicate a deterioration of lipreading ability with age in hearing individuals or an increase in lipreading efficiency with age in deaf individuals (source: http://www.ucl.ac.uk/dcal/dcal-news/story11).
Lipreading has been shown to activate not only the visual cortex of the brain but also the auditory cortex, in the same way as when actual speech is heard. Rather than maintaining clear-cut regions dedicated to different senses, the brain works in a multisensory fashion, making a coordinated effort to consider and combine all the types of speech information it receives, regardless of modality. Because hearing captures more articulatory detail than sight or touch, the brain uses speech sound to compensate for the other senses when it is available.
Speechreading is limited, however, in that many phonemes share the same viseme and thus are impossible to distinguish from visual information alone. Sounds whose place of articulation is deep inside the mouth or throat are not detectable, such as glottal consonants and most gestures of the tongue. Voiced and unvoiced pairs look identical, such as [p] and [b], [k] and [g], [t] and [d], [f] and [v], and [s] and [z]; likewise for nasalisation (e.g. [m] vs. [b]). It has been estimated that only 30% to 40% of sounds in the English language are distinguishable from sight alone.
Thus, for example, the phrase "where there's life, there's hope" looks identical to "where's the lavender soap" in most English dialects. Author Henry Kisor titled his book ''What's That Pig Outdoors?: A Memoir of Deafness'' in reference to his misreading of the spoken question "What's that big loud noise?", and used the example in the book to discuss the shortcomings of speechreading.
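To make this many-to-one phoneme-to-viseme mapping concrete, the short Python sketch below groups a handful of English consonants into viseme classes and checks the pairs listed above. The class names and groupings are simplified assumptions made for illustration, not a standard viseme inventory.

 # Illustrative sketch (assumed grouping): many English phonemes map onto the
 # same viseme, so a visual-only observer cannot tell them apart.
 VISEME_OF = {
     # bilabials: lips pressed together look the same regardless of voicing
     "p": "bilabial", "b": "bilabial", "m": "bilabial",
     # labiodentals: upper teeth on lower lip
     "f": "labiodental", "v": "labiodental",
     # alveolars: tongue tip behind the teeth, barely visible from outside
     "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar", "n": "alveolar",
     # velars: articulated deep in the mouth, essentially invisible
     "k": "velar", "g": "velar",
 }

 # Each voiced/unvoiced (and nasal) pair from the text collapses to one class.
 for a, b in [("p", "b"), ("k", "g"), ("t", "d"), ("f", "v"), ("s", "z"), ("m", "b")]:
     same = VISEME_OF[a] == VISEME_OF[b]
     print(f"[{a}] vs [{b}]: {'indistinguishable' if same else 'distinct'} by sight")

Every pair lands in the same class, which is why whole phrases such as "where there's life, there's hope" and "where's the lavender soap" can look identical on the lips.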
As a result, a speechreader must depend heavily on cues from the environment, on the context of the communication, and on knowledge of what is likely to be said. It is much easier to speechread customary phrases, such as greetings, or connected discourse on a familiar topic than utterances that appear in isolation and without supporting information, such as the name of a person never met before.
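As a rough illustration of how context can break such ties, the hypothetical Python sketch below scores two lip-identical candidate phrases against a set of words from the surrounding conversation; the candidate list, context words and scoring rule are all invented for the example.

 # Hypothetical illustration: two readings that look the same on the lips are
 # disambiguated by how well their words match the conversational context.
 CANDIDATES = ["where there's life, there's hope", "where's the lavender soap"]

 def context_score(phrase, context_words):
     """Count the candidate's words that also occur in the context (a crude prior)."""
     words = set(phrase.replace(",", "").replace("'", " ").split())
     return len(words & context_words)

 # Suppose the surrounding conversation was about washing up.
 context = {"bath", "wash", "soap", "towel", "lavender"}
 print(max(CANDIDATES, key=lambda p: context_score(p, context)))
 # -> where's the lavender soap

A speechreader does something similar implicitly, weighting interpretations by what is plausible in the situation.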
Difficult scenarios in which to speechread include:
* Lack of a clear view of the speaker's lips, including:
** obstructions such as moustaches or hands in front of the mouth
** the speaker's head turned aside or away
** a dark environment
** a bright back-lighting source, such as a window behind the speaker, that darkens the face
* Group discussions, especially when multiple people are talking in quick succession; the challenge here is to know where to look.
* Use of an unusual tone or rhythm of speech by the speaker.

Source: Wikipedia, the free encyclopedia (English edition), article "lip reading".


